Toward a Unified Theory of Sparse Dimensionality Reduction in Euclidean Space

Authors: Jean Bourgain and Jelani Nelson

Abstract
Let Φ ∈ ℝ^{m×n} be a sparse Johnson-Lindenstrauss transform [KN] with s non-zeroes per column. For T a subset of the unit sphere and ε ∈ (0, 1/2) given, we study the settings of m, s required to ensure E_Φ sup_{x∈T} |‖Φx‖₂² − 1| < ε, i.e. so that Φ preserves the norm of every x ∈ T simultaneously and multiplicatively up to 1 + ε. In particular, our most general theorem shows that it suffices to set m = Ω̃(γ₂²(T) + 1) and s = Ω̃(1) as long as s, m additionally satisfy a certain tradeoff condition that is governed by the geometry of T (and, as we show for several examples of interest, is easy to verify). Here γ₂ is Talagrand's functional [Tal05], and we write f = Ω̃(g) to mean f ≥ Cg(ε⁻¹ log n)^c for some constants C, c > 0. Our result can be seen as an extension to sparse Φ of the works [KM05, Gor88, MPTJ07], which were concerned with dense Φ having i.i.d. (sub)gaussian entries. Our work introduces a theory that qualitatively unifies several results related to the Johnson-Lindenstrauss lemma, subspace embeddings, and Fourier-based methods for obtaining matrices satisfying the restricted isometry property.
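As a concrete illustration of the object being analyzed, here is a minimal NumPy sketch of one standard sparse JL variant (s nonzeros per column, placed in s distinct uniformly random rows with independent random signs, scaled by 1/√s). This is a sketch under those assumptions, not necessarily the exact distribution from [KN] analyzed in the paper, and the check below evaluates the distortion over a finite sample standing in for T:

```python
import numpy as np

def sparse_jl(n, m, s, rng):
    """Sample a sparse JL matrix: s nonzeros per column, each +-1/sqrt(s),
    in s distinct uniformly random rows (one common [KN]-style variant)."""
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)   # s distinct rows
        Phi[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return Phi

rng = np.random.default_rng(0)
n, m, s = 2000, 200, 8                    # illustrative parameters only
Phi = sparse_jl(n, m, s, rng)

# Empirical proxy for E_Phi sup_{x in T} | ||Phi x||_2^2 - 1 |:
# evaluate the distortion over a finite sample of unit vectors.
T = rng.standard_normal((50, n))
T /= np.linalg.norm(T, axis=1, keepdims=True)
distortion = np.abs(np.sum((T @ Phi.T) ** 2, axis=1) - 1.0)
print("max squared-norm distortion over sampled T:", distortion.max())
```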
Similar resources
Dimensionality reduction with subgaussian matrices: a unified theory
We present a theory for Euclidean dimensionality reduction with subgaussian matrices which unifies several restricted isometry property and Johnson-Lindenstrauss type results obtained earlier for specific data sets. In particular, we recover and, in several cases, improve results for sets of sparse and structured sparse vectors, low-rank matrices and tensors, and smooth manifolds. In addition, ...
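As a toy instance of the statements this theory covers, the sketch below (all names and parameters are illustrative assumptions, not taken from the paper) checks how a dense matrix with i.i.d. Gaussian entries, one example of a subgaussian matrix, distorts the norms of k-sparse unit vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 1000, 120, 10                   # ambient dim, target dim, sparsity

# Dense subgaussian embedding: i.i.d. N(0, 1/m) entries.
A = rng.standard_normal((m, n)) / np.sqrt(m)

def random_k_sparse_unit(n, k, rng):
    """A unit vector supported on k uniformly random coordinates."""
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    return x / np.linalg.norm(x)

worst = max(abs(np.linalg.norm(A @ random_k_sparse_unit(n, k, rng)) ** 2 - 1.0)
            for _ in range(200))
print("worst squared-norm distortion over sampled k-sparse vectors:", worst)
```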
CS 229r: Algorithms for Big Data; 2 Dimensionality Reduction; 2.2 Limitations of Dimensionality Reduction
In the last lecture we proved several space lower bounds for streaming algorithms using the communication complexity model and some ideas from information theory. In this lecture we move on to the next topic: dimensionality reduction. Dimensionality reduction is useful when solving high-dimensional computational geometry problems (see the sketch below), such as:
• clustering
• nearest neighbor search
• numeric...
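To make the nearest-neighbor use case concrete, here is a small sketch (a dense Gaussian projection as a stand-in for any JL map; all dimensions are illustrative assumptions) of answering a query in the reduced space:

```python
import numpy as np

rng = np.random.default_rng(2)
n_points, d, m = 5000, 512, 64

X = rng.standard_normal((n_points, d))    # high-dimensional data set
q = rng.standard_normal(d)                # query point

# Gaussian JL projection: pairwise distances are preserved up to 1 +- eps
# with high probability once m = O(eps^-2 log n_points).
G = rng.standard_normal((m, d)) / np.sqrt(m)
Xp, qp = X @ G.T, G @ q

# Nearest neighbor in the cheap reduced space vs. the original space.
i_reduced = np.argmin(np.linalg.norm(Xp - qp, axis=1))
i_exact = np.argmin(np.linalg.norm(X - q, axis=1))
print("reduced-space NN:", i_reduced, " exact NN:", i_exact)
```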
2D Dimensionality Reduction Methods without Loss
In this paper, several two-dimensional extensions of principal component analysis (PCA) and linear discriminant analysis (LDA) have been applied in a lossless dimensionality reduction framework for the face recognition application. In this framework, the benefits of dimensionality reduction are used to improve the performance of the predictive model, which is a support vector machine (...
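The paper's two-dimensional PCA/LDA variants are not reproduced here; as a minimal stand-in for the reduce-then-classify pipeline the snippet describes, the following scikit-learn sketch uses ordinary PCA and an SVM on synthetic data (all parameters are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for face-image feature vectors.
X, y = make_classification(n_samples=600, n_features=400,
                           n_informative=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Reduce dimensionality with PCA, then classify with an SVM
# (ordinary 1D PCA here, not the paper's 2D extensions).
clf = make_pipeline(PCA(n_components=40), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```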
Hyperspectral Image Classification Based on the Fusion of the Features Generated by Sparse Representation Methods, Linear and Non-linear Transformations
The ability to record the high-resolution spectral signature of the earth's surface is the most important feature of hyperspectral sensors. On the other hand, classification of hyperspectral imagery is known as one of the methods for extracting information from these remote sensing data sources. Despite the high potential of hyperspectral images from the information-content point of view, there...
Dimensionality Reduction — Notes 3
where the inf is taken over all admissible sequences. We also let d_X(T) denote the diameter of T with respect to the norm ‖·‖_X. For the remainder of this section we make the definitions π_r x = argmin_{y∈T_r} ‖y − x‖_X and Δ_r x = π_r x − π_{r−1} x. Throughout this section we let ‖·‖ denote the ℓ₂→ℓ₂ operator norm in the case of matrix arguments, and the ℓ₂ norm in the case of vector arguments. Krahmer, Mendelso...
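For reference, the standard [Tal05] definitions these notes build on can be written out as follows (a sketch of the usual formulation; the notes' exact indexing and constants may differ):

```latex
% gamma_2 over admissible sequences (T_r): |T_0| = 1, |T_r| <= 2^{2^r}
\gamma_2(T) \;=\; \inf_{(T_r)_{r \ge 0}} \; \sup_{x \in T}
    \sum_{r=0}^{\infty} 2^{r/2} \, \| x - \pi_r x \|_X

% chaining decomposition that \pi_r x and \Delta_r x feed into:
x \;=\; \pi_0 x + \sum_{r \ge 1} \Delta_r x ,
\qquad \Delta_r x \;=\; \pi_r x - \pi_{r-1} x
```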
Publication date: 2013